The large language model Yi-34B uses the LLaMA architecture but renames its tensors, which has sparked controversy. The model's reported strengths include a context window exceeding 200k tokens, the ability to process roughly 400,000 Chinese characters at once, and a top placement on Hugging Face's open-source model leaderboard. The renaming has led the community to question whether Yi-34B made any substantive changes to the underlying code, and whether the release involves misleading claims or violations of the LLaMA license. The developer, 01.AI (Zero One Everything), responded that LLaMA itself follows the standard GPT architecture and that the tensor renaming was done for internal training and experimentation needs. The controversy centers on whether the model violates its open-source licensing obligations, and on the transparency of its performance figures and code modifications.